Visual Performance Fields: Frames of Reference
Authors
Abstract
Performance in most visual discrimination tasks is better along the horizontal than the vertical meridian (Horizontal-Vertical Anisotropy, HVA), and along the lower than the upper vertical meridian (Vertical Meridian Asymmetry, VMA), with intermediate performance at intercardinal locations. Because these inhomogeneities are prevalent across visual tasks, it is important to understand the perceptual consequences of dissociating spatial reference frames. In all studies of performance fields so far, allocentric environmental references and egocentric observer reference frames were aligned. Here we quantified the effects of manipulating head-centric and retinotopic coordinates on the shape of visual performance fields. When observers viewed briefly presented radial arrays of Gabors and discriminated the tilt of a target relative to homogeneously oriented distractors, performance fields shifted with head tilt (Experiment 1) and with fixation (Experiment 2). These results show that performance fields shift in line with egocentric referents, corresponding to the retinal location of the stimulus.
Similar Articles
A Machine Learning Approach to No-Reference Objective Video Quality Assessment for High Definition Resources
Video quality assessment must be adapted to the human visual system, which is why researchers have performed subjective viewing experiments to determine the encoding conditions under which video systems provide the best quality to the user. The objective of this study is to assess video quality using image feature extraction, without using a reference video. RMSE values and processing ...
Frontal eye fields involved in shifting frame of reference within working memory for scenes.
Working memory (WM) evoked by linguistic cues for allocentric spatial and egocentric spatial aspects of a visual scene was investigated by correlating fMRI BOLD signal (or "activation") with performance on a spatial-relations task. Subjects indicated the relative positions of a person or object (referenced by the personal pronouns "he/she/it") in a previously shown image relative to either them...
Eye-centered visual receptive fields in the ventral intraparietal area.
The ventral intraparietal area (VIP) processes multisensory visual, vestibular, tactile, and auditory signals in diverse reference frames. We recently reported that visual heading signals in VIP are represented in an approximately eye-centered reference frame when measured using large-field optic flow stimuli. No VIP neuron was found to have head-centered visual heading tuning, and only a small...
Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus.
The integration of visual and auditory events is thought to require a joint representation of visual and auditory space in a common reference frame. We investigated the coding of visual and auditory space in the lateral and medial intraparietal areas (LIP, MIP) as a candidate for such a representation. We recorded the activity of 275 neurons in LIP and MIP of two monkeys while they performed sa...